Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving to CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [4]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change the number of examples displayed by changing show_n_images.

In [5]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[5]:
<matplotlib.image.AxesImage at 0x7fe74bb485c0>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change the number of examples displayed by changing show_n_images.

In [6]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Out[6]:
<matplotlib.image.AxesImage at 0x7fe74baf43c8>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The values of the MNIST and CelebA datasets will be scaled to the range -0.5 to 0.5, and the images will be 28x28. The CelebA images will be cropped to remove the parts of each image that don't include a face, then resized down to 28x28.

The MNIST images are black and white with a single color channel, while the CelebA images have 3 color channels (RGB).
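
To make the preprocessing concrete, here is a minimal sketch of what it might look like for a single CelebA image. This is not the actual helper.py implementation; the crop size and resampling filter are assumptions.

import numpy as np
from PIL import Image

def preprocess_celeba_image(path, face_width=108, face_height=108, out_size=28):
    """Center-crop around the face, resize to 28x28, and scale pixel values to [-0.5, 0.5]."""
    image = Image.open(path)
    # CelebA faces are roughly centered, so a center crop removes most of the background
    left = (image.width - face_width) // 2
    top = (image.height - face_height) // 2
    image = image.crop((left, top, left + face_width, top + face_height))
    # Resize the crop down to 28x28 (RGB keeps 3 channels; MNIST would use mode 'L')
    image = image.resize((out_size, out_size), Image.BILINEAR)
    # Scale from [0, 255] to [-0.5, 0.5]
    return np.array(image, dtype=np.float32) / 255.0 - 0.5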

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.

In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Reset Graph

In [8]:
tf.reset_default_graph()

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [9]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32)    
    
    return inputs_real, inputs_z, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [10]:
def generator(z, out_channel_dim, is_train=True, alpha=0.2):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function                
    reuse = not is_train  # when generating samples (is_train=False), reuse the variables created during training
    
    with tf.variable_scope('generator', reuse=reuse):                    
        # First fully connected layer
        layer = tf.layers.dense(z, 7*7*256, activation=None, use_bias=False)
        # 7*7*256
        layer = tf.reshape(layer, (-1, 7, 7, 256))  # -1 lets TensorFlow infer the batch size
        layer = tf.layers.batch_normalization(layer, axis=-1, training=is_train)
        layer = tf.maximum(alpha * layer, layer)
                
        # 14*14*128
        layer = tf.layers.conv2d_transpose(layer, filters=128, kernel_size=5, strides=2, activation=None, use_bias=False, padding='same')
        layer = tf.layers.batch_normalization(layer, axis=-1, training=is_train)        
        layer = tf.maximum(alpha * layer, layer)

        # 28*28*out_channel_dim
        logits = tf.layers.conv2d_transpose(layer, filters=out_channel_dim, kernel_size=5, strides=2, activation=None, use_bias=True, padding='same')
        # Scale tanh by 0.5 so the output lies in [-0.5, 0.5], matching the preprocessed data range
        out = 0.5 * tf.tanh(logits)
        
        return out    
    
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [11]:
def discriminator(images, reuse=False, alpha=0.2, is_train=True):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 28x28xout_channel_dim       
        # 14*14*64
        layer = tf.layers.conv2d(images, filters=64, kernel_size=5, strides=2, activation=None, use_bias=True, padding='same')
        layer = tf.maximum(alpha * layer, layer)        

        # 7*7*128
        layer = tf.layers.conv2d(layer, filters=128, kernel_size=5, strides=2, activation=None, use_bias=False, padding='same')
        layer = tf.layers.batch_normalization(layer, axis=-1, training=is_train)        
        layer = tf.maximum(alpha * layer, layer)

        # 4*4*256
        layer = tf.layers.conv2d(layer, filters=256, kernel_size=5, strides=2, activation=None, use_bias=False, padding='same')
        layer = tf.layers.batch_normalization(layer, axis=-1, training=is_train)        
        layer = tf.maximum(alpha * layer, layer)
        
        layer = tf.reshape(layer, (-1, 4*4*256))
        logits = tf.layers.dense(layer, 1, activation=None, use_bias=True)
        out = tf.sigmoid(logits)
                
        return out, logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
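
For reference, with discriminator D, generator G, real image x, and noise vector z, the sigmoid cross-entropy terms computed in the cell below correspond (averaged over the batch) to the standard non-saturating GAN objectives:

$$\mathcal{L}_D = -\mathbb{E}_x[\log D(x)] - \mathbb{E}_z[\log(1 - D(G(z)))]$$
$$\mathcal{L}_G = -\mathbb{E}_z[\log D(G(z))]$$
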
In [12]:
def model_loss(input_real, input_z, out_channel_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    g_model = generator(input_z, out_channel_dim, alpha=alpha)
    d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)

    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [13]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    # Get weights and bias to update
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]

    # Optimize: apply updates inside UPDATE_OPS control dependencies so batch normalization statistics are updated during training
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt        

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [14]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode, alpha=0.2):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, is_train=False, alpha=alpha),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GAN. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [15]:
def train(epoch_count, batch_size, z_dim, learning_rate_val, beta1, get_batches, data_shape, data_image_mode, alpha=0.2, print_every=10, show_every=100):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate_val: Learning rate value fed to the learning rate placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
#     tf.reset_default_graph()

    image_count, image_width, image_height, image_channels = data_shape
    input_real, input_z, learning_rate = model_inputs(image_width, image_height, image_channels, z_dim)

    d_loss, g_loss = model_loss(input_real, input_z, image_channels, alpha=alpha)

    d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)

    
    losses = []
    steps = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # TODO: Train Model
                steps += 1
#                 print('epoch', epoch_i+1, 'steps', steps)
                
                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))

                # Run optimizers                
                _ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, learning_rate: learning_rate_val})
                _ = sess.run(g_opt, feed_dict={input_z: batch_z, input_real: batch_images, learning_rate: learning_rate_val})

                if (steps == 1) or (steps % print_every == 0):
                    # Periodically compute the losses and print them out
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})

                    print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
                          "steps", steps,
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))
                    # Save losses to view after training
                    losses.append((train_loss_d, train_loss_g))

                if (epoch_i == 0 and steps % 100 == 1) or (steps % show_every == 0):
                    show_generator_output(sess, 8*8, input_z, image_channels, data_image_mode, alpha=alpha)
        
    return losses

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
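
As a quick, informal sanity check, you can average the most recently recorded losses. This is a minimal sketch assuming train returns the list of (discriminator loss, generator loss) tuples as implemented above; summarize_losses is just an illustrative helper, not part of helper.py.

import numpy as np

def summarize_losses(losses, last_n=10):
    """Print the mean discriminator and generator loss over the last `last_n` recorded values."""
    recent = np.array(losses[-last_n:])
    d_mean, g_mean = recent.mean(axis=0)
    print('Mean discriminator loss over last {} records: {:.4f}'.format(last_n, d_mean))
    print('Mean generator loss over last {} records: {:.4f}'.format(last_n, g_mean))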

In [14]:
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
In [15]:
mnist_dataset.shape
Out[15]:
(60000, 28, 28, 1)
In [16]:
image_count, image_width, image_height, image_channel = mnist_dataset.shape
image_width, image_height, image_channel
Out[16]:
(28, 28, 1)
In [17]:
batch_size = 128
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
alpha = 0.2


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    losses = train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
        mnist_dataset.shape, mnist_dataset.image_mode, 
        alpha=alpha, print_every=20, show_every=300)
Epoch 1/2... steps 1 Discriminator Loss: 1.8694... Generator Loss: 0.1990
Epoch 1/2... steps 20 Discriminator Loss: 0.3597... Generator Loss: 2.6025
Epoch 1/2... steps 40 Discriminator Loss: 1.5196... Generator Loss: 6.1984
Epoch 1/2... steps 60 Discriminator Loss: 0.6827... Generator Loss: 3.1339
Epoch 1/2... steps 80 Discriminator Loss: 0.6287... Generator Loss: 1.3155
Epoch 1/2... steps 100 Discriminator Loss: 0.9158... Generator Loss: 0.7058
Epoch 1/2... steps 120 Discriminator Loss: 0.5820... Generator Loss: 2.0694
Epoch 1/2... steps 140 Discriminator Loss: 0.9127... Generator Loss: 0.7037
Epoch 1/2... steps 160 Discriminator Loss: 0.6198... Generator Loss: 1.8784
Epoch 1/2... steps 180 Discriminator Loss: 1.0512... Generator Loss: 0.5579
Epoch 1/2... steps 200 Discriminator Loss: 0.6942... Generator Loss: 2.7049
Epoch 1/2... steps 220 Discriminator Loss: 0.5252... Generator Loss: 2.2575
Epoch 1/2... steps 240 Discriminator Loss: 0.4475... Generator Loss: 1.6609
Epoch 1/2... steps 260 Discriminator Loss: 0.4783... Generator Loss: 1.4636
Epoch 1/2... steps 280 Discriminator Loss: 0.5083... Generator Loss: 1.2246
Epoch 1/2... steps 300 Discriminator Loss: 0.4880... Generator Loss: 1.3472
Epoch 1/2... steps 320 Discriminator Loss: 0.3548... Generator Loss: 2.2144
Epoch 1/2... steps 340 Discriminator Loss: 0.3682... Generator Loss: 1.8413
Epoch 1/2... steps 360 Discriminator Loss: 0.2979... Generator Loss: 2.2605
Epoch 1/2... steps 380 Discriminator Loss: 0.3867... Generator Loss: 2.6495
Epoch 1/2... steps 400 Discriminator Loss: 0.4145... Generator Loss: 2.0994
Epoch 1/2... steps 420 Discriminator Loss: 0.4478... Generator Loss: 1.7967
Epoch 1/2... steps 440 Discriminator Loss: 0.4825... Generator Loss: 1.3825
Epoch 1/2... steps 460 Discriminator Loss: 0.4238... Generator Loss: 2.7179
Epoch 2/2... steps 480 Discriminator Loss: 0.4130... Generator Loss: 1.8168
Epoch 2/2... steps 500 Discriminator Loss: 1.4201... Generator Loss: 4.6818
Epoch 2/2... steps 520 Discriminator Loss: 1.6563... Generator Loss: 0.3273
Epoch 2/2... steps 540 Discriminator Loss: 0.5655... Generator Loss: 1.2291
Epoch 2/2... steps 560 Discriminator Loss: 0.4629... Generator Loss: 1.6970
Epoch 2/2... steps 580 Discriminator Loss: 0.4568... Generator Loss: 1.7512
Epoch 2/2... steps 600 Discriminator Loss: 0.6128... Generator Loss: 1.0123
Epoch 2/2... steps 620 Discriminator Loss: 0.5297... Generator Loss: 2.2188
Epoch 2/2... steps 640 Discriminator Loss: 0.5138... Generator Loss: 1.9313
Epoch 2/2... steps 660 Discriminator Loss: 0.5405... Generator Loss: 1.1940
Epoch 2/2... steps 680 Discriminator Loss: 1.1483... Generator Loss: 3.7329
Epoch 2/2... steps 700 Discriminator Loss: 0.4422... Generator Loss: 1.6784
Epoch 2/2... steps 720 Discriminator Loss: 0.4936... Generator Loss: 1.5550
Epoch 2/2... steps 740 Discriminator Loss: 0.5635... Generator Loss: 1.3021
Epoch 2/2... steps 760 Discriminator Loss: 0.5144... Generator Loss: 1.2851
Epoch 2/2... steps 780 Discriminator Loss: 0.6354... Generator Loss: 0.9966
Epoch 2/2... steps 800 Discriminator Loss: 0.5895... Generator Loss: 1.0835
Epoch 2/2... steps 820 Discriminator Loss: 0.4392... Generator Loss: 1.4754
Epoch 2/2... steps 840 Discriminator Loss: 0.5696... Generator Loss: 2.4574
Epoch 2/2... steps 860 Discriminator Loss: 0.6956... Generator Loss: 3.0170
Epoch 2/2... steps 880 Discriminator Loss: 0.4605... Generator Loss: 1.4033
Epoch 2/2... steps 900 Discriminator Loss: 0.4068... Generator Loss: 1.5798
Epoch 2/2... steps 920 Discriminator Loss: 0.4099... Generator Loss: 1.5714
In [20]:
%matplotlib inline

import matplotlib.pyplot as plt

losses = np.array(losses)

fig, axes = plt.subplots(2, 1, figsize=(8,8),)
ylabels = ['Discriminator loss', 'Generator loss']
targets = [0.3, 2.5]
for i, ax in zip([0, 1], axes.flatten()):
    ax.plot(losses.T[i])
    ax.plot([targets[i]] * len(losses.T[i]), label='target', linestyle='--')    
    ax.set_ylabel(ylabels[i])
    ax.legend()

CelebA

Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.

In [16]:
batch_size = 128
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
alpha = 0.2


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
print('celeba_dataset.shape', celeba_dataset.shape)

with tf.Graph().as_default():
    losses = train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode,
          alpha=alpha, print_every=20, show_every=300)
celeba_dataset.shape (202599, 28, 28, 3)
Epoch 1/1... steps 1 Discriminator Loss: 2.5221... Generator Loss: 0.1040
Epoch 1/1... steps 20 Discriminator Loss: 0.2407... Generator Loss: 2.5324
Epoch 1/1... steps 40 Discriminator Loss: 0.1270... Generator Loss: 3.1013
Epoch 1/1... steps 60 Discriminator Loss: 0.4107... Generator Loss: 3.1439
Epoch 1/1... steps 80 Discriminator Loss: 0.6323... Generator Loss: 1.4351
Epoch 1/1... steps 100 Discriminator Loss: 1.2128... Generator Loss: 0.5618
Epoch 1/1... steps 120 Discriminator Loss: 0.8316... Generator Loss: 3.2106
Epoch 1/1... steps 140 Discriminator Loss: 0.5367... Generator Loss: 1.6383
Epoch 1/1... steps 160 Discriminator Loss: 1.0787... Generator Loss: 0.5663
Epoch 1/1... steps 180 Discriminator Loss: 0.6601... Generator Loss: 1.1795
Epoch 1/1... steps 200 Discriminator Loss: 0.7622... Generator Loss: 0.9573
Epoch 1/1... steps 220 Discriminator Loss: 0.8891... Generator Loss: 0.7391
Epoch 1/1... steps 240 Discriminator Loss: 0.6833... Generator Loss: 1.1872
Epoch 1/1... steps 260 Discriminator Loss: 0.5700... Generator Loss: 1.5838
Epoch 1/1... steps 280 Discriminator Loss: 1.0696... Generator Loss: 0.6009
Epoch 1/1... steps 300 Discriminator Loss: 1.3684... Generator Loss: 2.3908
Epoch 1/1... steps 320 Discriminator Loss: 1.0533... Generator Loss: 2.2965
Epoch 1/1... steps 340 Discriminator Loss: 1.0172... Generator Loss: 1.6502
Epoch 1/1... steps 360 Discriminator Loss: 0.9186... Generator Loss: 0.7139
Epoch 1/1... steps 380 Discriminator Loss: 0.8219... Generator Loss: 1.2673
Epoch 1/1... steps 400 Discriminator Loss: 1.0099... Generator Loss: 0.6388
Epoch 1/1... steps 420 Discriminator Loss: 0.7264... Generator Loss: 1.1022
Epoch 1/1... steps 440 Discriminator Loss: 1.0415... Generator Loss: 0.6772
Epoch 1/1... steps 460 Discriminator Loss: 0.9986... Generator Loss: 1.3640
Epoch 1/1... steps 480 Discriminator Loss: 1.1657... Generator Loss: 0.7402
Epoch 1/1... steps 500 Discriminator Loss: 0.8230... Generator Loss: 1.0172
Epoch 1/1... steps 520 Discriminator Loss: 0.8944... Generator Loss: 0.7546
Epoch 1/1... steps 540 Discriminator Loss: 1.0040... Generator Loss: 0.7084
Epoch 1/1... steps 560 Discriminator Loss: 1.0038... Generator Loss: 0.9780
Epoch 1/1... steps 580 Discriminator Loss: 0.9614... Generator Loss: 1.1045
Epoch 1/1... steps 600 Discriminator Loss: 0.9464... Generator Loss: 0.7304
Epoch 1/1... steps 620 Discriminator Loss: 0.8147... Generator Loss: 1.5928
Epoch 1/1... steps 640 Discriminator Loss: 0.7170... Generator Loss: 1.3423
Epoch 1/1... steps 660 Discriminator Loss: 0.9866... Generator Loss: 0.7674
Epoch 1/1... steps 680 Discriminator Loss: 0.7682... Generator Loss: 1.5821
Epoch 1/1... steps 700 Discriminator Loss: 0.9737... Generator Loss: 0.8539
Epoch 1/1... steps 720 Discriminator Loss: 0.7517... Generator Loss: 1.4999
Epoch 1/1... steps 740 Discriminator Loss: 0.8186... Generator Loss: 1.4613
Epoch 1/1... steps 760 Discriminator Loss: 0.9705... Generator Loss: 1.6252
Epoch 1/1... steps 780 Discriminator Loss: 0.8781... Generator Loss: 1.0030
Epoch 1/1... steps 800 Discriminator Loss: 0.8430... Generator Loss: 1.0671
Epoch 1/1... steps 820 Discriminator Loss: 0.7166... Generator Loss: 1.5122
Epoch 1/1... steps 840 Discriminator Loss: 1.1821... Generator Loss: 1.1745
Epoch 1/1... steps 860 Discriminator Loss: 0.5640... Generator Loss: 1.5754
Epoch 1/1... steps 880 Discriminator Loss: 0.6526... Generator Loss: 1.2916
Epoch 1/1... steps 900 Discriminator Loss: 1.0298... Generator Loss: 2.0823
Epoch 1/1... steps 920 Discriminator Loss: 0.6265... Generator Loss: 1.3536
Epoch 1/1... steps 940 Discriminator Loss: 0.3491... Generator Loss: 2.1710
Epoch 1/1... steps 960 Discriminator Loss: 0.6443... Generator Loss: 3.1477
Epoch 1/1... steps 980 Discriminator Loss: 0.8988... Generator Loss: 4.5280
Epoch 1/1... steps 1000 Discriminator Loss: 0.8412... Generator Loss: 2.7325
Epoch 1/1... steps 1020 Discriminator Loss: 0.7997... Generator Loss: 2.8498
Epoch 1/1... steps 1040 Discriminator Loss: 0.2399... Generator Loss: 2.3117
Epoch 1/1... steps 1060 Discriminator Loss: 0.2276... Generator Loss: 5.0620
Epoch 1/1... steps 1080 Discriminator Loss: 0.1314... Generator Loss: 4.5984
Epoch 1/1... steps 1100 Discriminator Loss: 0.1575... Generator Loss: 3.1922
Epoch 1/1... steps 1120 Discriminator Loss: 0.3066... Generator Loss: 2.1473
Epoch 1/1... steps 1140 Discriminator Loss: 0.4425... Generator Loss: 1.3073
Epoch 1/1... steps 1160 Discriminator Loss: 0.4766... Generator Loss: 1.4249
Epoch 1/1... steps 1180 Discriminator Loss: 0.1401... Generator Loss: 2.4966
Epoch 1/1... steps 1200 Discriminator Loss: 0.1929... Generator Loss: 6.3349
Epoch 1/1... steps 1220 Discriminator Loss: 0.7494... Generator Loss: 5.6012
Epoch 1/1... steps 1240 Discriminator Loss: 0.0395... Generator Loss: 8.5723
Epoch 1/1... steps 1260 Discriminator Loss: 0.3966... Generator Loss: 1.3730
Epoch 1/1... steps 1280 Discriminator Loss: 0.0332... Generator Loss: 6.4345
Epoch 1/1... steps 1300 Discriminator Loss: 0.1852... Generator Loss: 4.2412
Epoch 1/1... steps 1320 Discriminator Loss: 0.3879... Generator Loss: 1.3749
Epoch 1/1... steps 1340 Discriminator Loss: 0.0949... Generator Loss: 6.0841
Epoch 1/1... steps 1360 Discriminator Loss: 0.0762... Generator Loss: 4.8114
Epoch 1/1... steps 1380 Discriminator Loss: 0.5277... Generator Loss: 1.0910
Epoch 1/1... steps 1400 Discriminator Loss: 0.0443... Generator Loss: 13.2276
Epoch 1/1... steps 1420 Discriminator Loss: 0.0226... Generator Loss: 6.9192
Epoch 1/1... steps 1440 Discriminator Loss: 1.1209... Generator Loss: 0.4847
Epoch 1/1... steps 1460 Discriminator Loss: 0.1269... Generator Loss: 2.9755
Epoch 1/1... steps 1480 Discriminator Loss: 0.1479... Generator Loss: 2.6538
Epoch 1/1... steps 1500 Discriminator Loss: 0.0123... Generator Loss: 6.1151
Epoch 1/1... steps 1520 Discriminator Loss: 0.0683... Generator Loss: 6.9658
Epoch 1/1... steps 1540 Discriminator Loss: 0.9984... Generator Loss: 8.4419
Epoch 1/1... steps 1560 Discriminator Loss: 0.2050... Generator Loss: 8.4918
Epoch 1/1... steps 1580 Discriminator Loss: 1.2812... Generator Loss: 0.4069
In [18]:
%matplotlib inline

import matplotlib.pyplot as plt

losses = np.array(losses)

fig, axes = plt.subplots(2, 1, figsize=(8,8),)
ylabels = ['Discriminator loss', 'Generator loss']
targets = [0.3, 2.5]
for i, ax in zip([0, 1], axes.flatten()):
    ax.plot(losses.T[i])
    ax.plot([targets[i]] * len(losses.T[i]), label='target', linestyle='--')    
    ax.set_ylabel(ylabels[i])
    ax.legend()

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and also export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.